Neural Network Gradient Hamiltonian Monte Carlo
Hamiltonian Monte Carlo is a widely used algorithm for sampling from
posterior distributions of complex Bayesian models. It can efficiently explore
high-dimensional parameter spaces guided by simulated Hamiltonian flows.
However, the algorithm requires repeated gradient calculations, and these
computations become increasingly burdensome as data sets scale. We present a
method to substantially reduce the computation burden by using a neural network
to approximate the gradient. First, we prove that the proposed method
maintains convergence to the true distribution even though the approximate
gradient no longer derives from a Hamiltonian system. Second, we conduct
experiments on synthetic examples and real data sets to validate the proposed
method.
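The key idea above can be sketched in code. Below is a minimal, illustrative HMC step in which the leapfrog integrator uses a cheap surrogate gradient (in the paper, a neural network approximation), while the Metropolis correction still evaluates the exact log posterior; the function names `surrogate_grad` and `log_post` are placeholders, not from the paper.

```python
import numpy as np

def hmc_step(theta, log_post, surrogate_grad, step_size=0.1, n_leapfrog=10, rng=None):
    """One HMC step using an approximate gradient in the dynamics.

    The Metropolis accept/reject uses the exact log posterior, which is
    what preserves convergence to the true distribution even when the
    simulated dynamics are only approximately Hamiltonian.
    """
    rng = np.random.default_rng() if rng is None else rng
    p = rng.standard_normal(theta.shape)              # sample momentum
    theta_new, p_new = theta.copy(), p.copy()

    # Leapfrog integration driven by the surrogate gradient
    p_new += 0.5 * step_size * surrogate_grad(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p_new
        p_new += step_size * surrogate_grad(theta_new)
    theta_new += step_size * p_new
    p_new += 0.5 * step_size * surrogate_grad(theta_new)

    # Metropolis correction with the exact target
    current_H = -log_post(theta) + 0.5 * p @ p
    proposed_H = -log_post(theta_new) + 0.5 * p_new @ p_new
    if np.log(rng.uniform()) < current_H - proposed_H:
        return theta_new, True
    return theta, False

# Toy usage: sample a standard 2-D Gaussian (here the "surrogate" is exact)
log_post = lambda th: -0.5 * th @ th
grad = lambda th: -th
theta = np.zeros(2)
for _ in range(100):
    theta, _ = hmc_step(theta, log_post, grad)
```

In practice the surrogate would be a trained network's gradient, so each leapfrog step avoids a full pass over the data set.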
Modeling Dynamic Functional Connectivity with Latent Factor Gaussian Processes
Dynamic functional connectivity, as measured by the time-varying covariance
of neurological signals, is believed to play an important role in many aspects
of cognition. While many methods have been proposed, reliably establishing the
presence and characteristics of brain connectivity is challenging due to the
high dimensionality and noisiness of neuroimaging data. We present a latent
factor Gaussian process model which addresses these challenges by learning a
parsimonious representation of connectivity dynamics. The proposed model
naturally allows for inference and visualization of time-varying connectivity.
As an illustration of the scientific utility of the model, application to a
data set of rat local field potential activity recorded during a complex
non-spatial memory task provides evidence of stimulus differentiation.
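The latent factor construction described above can be illustrated with a short sketch. The following is an assumed, simplified version of such a model (not the paper's exact specification): each entry of a time-varying factor loading matrix follows a Gaussian process over time, yielding a smooth, low-rank-plus-diagonal covariance trajectory for the observed signals.

```python
import numpy as np

def rbf_kernel(t, length_scale=1.0):
    """Squared-exponential GP kernel over time points t."""
    d = t[:, None] - t[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
T, d, k = 50, 6, 2                       # time points, signals, latent factors
t = np.linspace(0.0, 10.0, T)

# GP prior over time for every loading entry (shared kernel, for simplicity)
K = rbf_kernel(t) + 1e-6 * np.eye(T)
Lchol = np.linalg.cholesky(K)

# Draw each of the d*k loading trajectories from the GP prior
loadings = np.einsum("ts,sdk->tdk", Lchol, rng.standard_normal((T, d, k)))

# Time-varying covariance: low-rank factor part plus diagonal noise
sigma2 = 0.1
Sigma = np.einsum("tdk,tek->tde", loadings, loadings) + sigma2 * np.eye(d)
```

Because the dynamics live in the k-dimensional factor loadings rather than in the full d x d covariance, the representation stays parsimonious, and the smooth loading trajectories can be plotted directly to visualize time-varying connectivity.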
Improving Statistical Inference through Flexible Approximations
In the statistics and machine learning communities, there exists a perceived dichotomy between statistical inference and out-of-sample prediction. Statistical inference is often done with models that are carefully specified a priori, while out-of-sample prediction is often done with “black-box” models that have greater flexibility. The former is more concerned with the theoretical properties of models as data become infinite; the latter focuses more on algorithms that scale up to larger data sets. To a scientist outside of these communities, the distinction between inference and prediction might not seem so clear. With technological advancements, scientists can now collect overwhelming amounts of data in various formats, and their objective is to make sense of the data. To this end, we propose a synergy of statistical inference with the workhorses of prediction, neural networks and Gaussian processes. Despite hardware improvements under Moore’s law, ever bigger data and more complex models pose computational challenges for statistical inference. To address these computational challenges, we approximate functional forms of the data to effectively reduce the burden of model evaluation. In addition, we present a case study where we use flexible models to learn scientifically interesting representations of rat memories from experimental data for a better understanding of the brain.